Unexpected end of ZLIB input stream #203
Comments
I set the environment variable
Tonight the update of the mirror was successful. I wonder whether a parallel read request locks the file that is currently being written.
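If a client does pick up the file while it is still being written, the truncated gzip stream would explain exactly this exception. A quick way to reproduce the symptom locally (the path is taken from the log messages below; the byte count is arbitrary):

```sh
# Cut a valid archive short to simulate a reader that catches the file
# mid-write, then test it: gzip reports the same class of error
# ("unexpected end of file") that clients see as
# "Unexpected end of ZLIB input stream".
head -c 100000 /usr/local/apache2/htdocs/nvdcve-2024.json.gz > /tmp/partial.json.gz
gzip -t /tmp/partial.json.gz
```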
The problem has now occurred again. I see in the Docker logs that the mirror script ran twice! It started at 00:00:01 and again at 00:01:06. That's why I also previously saw this warning:
The second run hit the problem "Unable to read cached data: /usr/local/apache2/htdocs/nvdcve-2024.json.gz" at 00:01:22. The file was last modified at 00:01, so the parallel run might be trying to read the file while it is still being written. After reviewing the log files, I see that this also happened on the 28th, when the cache last broke, and on some days before that. On those days the log also ends with this line:
I wonder why the process runs twice only on some days 🤔 Is it a race condition between supervisord and cron? It looks like the additional run is an
The real problem is that
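If the duplicate run really is a second cron invocation overlapping the first, serializing the script would at least stop the two runs from writing the same files at once. A minimal sketch, assuming the image triggers the update through a crontab line roughly like this (the schedule is a guess; the log path appears in the logs above):

```sh
# flock -n takes a non-blocking exclusive lock; a second invocation that
# starts while the first is still running exits immediately instead of
# reading and writing half-finished files in parallel.
0 0 * * * flock -n /var/run/mirror.lock /mirror.sh >> /var/log/cron_mirror.log 2>&1
```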
I dug into the Kubernetes logs. The pod/container was restarted at that time. The description is:
The container does not have enough memory. We limit it to
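For reference, the restart reason can be read straight from the pod status; the pod name and namespace below are placeholders:

```sh
# An out-of-memory kill shows up as reason "OOMKilled" in the container's
# last terminated state.
kubectl get pod vuln-mirror-0 -n mirror \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```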
So this is a memory issue. We've already done a lot to improve this - not sure how much more we can do.
But maybe you can improve the Dockerfile by removing the fixed
Edit:
As far as I know, nist-data-mirror downloaded data to a temp directory and only then copied it to the Apache directory, probably to avoid such issues.
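A hedged sketch of that pattern for this image, assuming the files are served from /usr/local/apache2/htdocs (the path that appears in the error messages above); the staging directory and update step are illustrative:

```sh
#!/bin/sh
# Write the update into a staging directory on the same filesystem, then move
# the finished files into the Apache document root. mv within one filesystem
# is atomic, so readers never see a partially written .gz file.
HTDOCS=/usr/local/apache2/htdocs
STAGING=$(mktemp -d "$HTDOCS/.staging.XXXXXX")

# ... run the actual mirror update against "$STAGING" instead of "$HTDOCS" ...

for f in "$STAGING"/*; do
  [ -e "$f" ] && mv "$f" "$HTDOCS/"
done
rmdir "$STAGING"
```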
This is a follow-up of jeremylong/DependencyCheck#5798
We have been seeing this exception recently, approximately since the beginning of July. We use a local mirror in a Kubernetes cluster running the open-vulnerability-data-mirror image (currently v6.1.7).
Our findings so far:
I tried running the update manually via `/mirror.sh` while the mirror was in an inconsistent state. It failed directly in the container while reading its local files. So I guess the update running in the container somehow corrupts some files so that they are no longer usable. These corrupt files are then propagated to the users of the mirror.
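For completeness, the manual run mentioned above was simply the script executed inside the running container, along these lines (container and pod names are placeholders):

```sh
# Plain Docker:
docker exec -it vuln-mirror /mirror.sh

# Or in the Kubernetes setup described above:
kubectl exec -it vuln-mirror-0 -n mirror -- /mirror.sh
```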
Observations from this weekend: One night `mirror.sh` updated the files successfully. The second night it was somehow interrupted and only three files were updated. Running the script directly shows that the last file, `nvdcve-2023.json.gz`, is corrupt; that is also the content of `/var/log/cron_mirror.log` from the last night. Full stack trace: mirror-exception-zlibstream.txt
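To see which cached year file is the broken one without waiting for a client to fail, an integrity check over the served directory is enough (path taken from the logs above):

```sh
# gzip -t decompresses each archive to /dev/null and reports truncated or
# otherwise corrupt files without modifying them.
for f in /usr/local/apache2/htdocs/nvdcve-*.json.gz; do
  gzip -t "$f" || echo "corrupt: $f"
done
```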
So I don't know how the file gets corrupted. But it would be great if `vulnz` would just delete the corrupt file and download the current version in the `cve` command.
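Until something like that exists in `vulnz` itself, a manual workaround seems possible: delete whatever fails the integrity check and let a new mirror run fetch it again. A sketch, assuming a fresh `/mirror.sh` run re-downloads missing files:

```sh
# Remove any cached archive that fails the gzip integrity check, then trigger
# a fresh mirror run to re-download it.
for f in /usr/local/apache2/htdocs/nvdcve-*.json.gz; do
  gzip -t "$f" 2>/dev/null || rm -v "$f"
done
/mirror.sh
```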